I'm trying to do this challenge: https://www.root-me.org/en/Challenges/Cryptanalysis/File-PKZIP. I wrote a function to crack it:
import subprocess
from time import sleep

file = open('/home/begood/Downloads/SecLists-master/Passwords/'
            'rockyou-75.txt', 'r')
lines = file.readlines()
file.close()
for line in lines:
    command = 'unzip -P ' + line.strip() + ' /home/begood/Downloads/ch5.zip'
    print command
    p = subprocess.Popen(
        command,
        stdout=subprocess.PIPE, shell=True).communicate()[0]
    if 'replace' in p:
        print 'y\n'
        sleep(1)
It stops at the password scooter:
unzip -P scooter /home/begood/Downloads/ch5.zip
replace readme.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename:
but when I use that password to unzip manually, it says:
inflating: /home/begood/readme.txt
error: invalid compressed data to inflate
And the real password is 14535. Why does PKZIP accept two passwords?
I presume that the encryption being used is the old, very weak, encryption that was part of the original PKZIP format.
That encryption method has a 12-byte salt header before the compressed data. From the PKWare specification:
After the header is decrypted, the last 1 or 2 bytes in Buffer
should be the high-order word/byte of the CRC for the file being
decrypted, stored in Intel low-byte/high-byte order. Versions of
PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is
used on versions after 2.0. This can be used to test if the password
supplied is correct or not.
It was originally two bytes in the 1.0 specification, but in the 2.0 specification, and in the associated version of PKZIP, the check value was changed to one byte in order to make password searches like the one you are doing more difficult. The result is that about one out of every 256 random passwords will pass that first check, and unzip then proceeds to try to decompress the incorrectly decrypted compressed data, only then running into an error.
So it's far, far more than two passwords that will be "accepted". However it won't take very many bytes of decompressed data to detect that the password was nevertheless incorrect.
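A practical way around this, sketched below with Python's standard zipfile module (the paths are the ones from the question; everything else is illustrative, not the asker's original approach): read the full archive member for each candidate password instead of probing unzip's prompt. Reading forces decompression and a CRC-32 check, so a password that only happens to pass the 1-byte header check is still rejected.

import zipfile
import zlib

def find_password(zip_path, wordlist_path):
    zf = zipfile.ZipFile(zip_path)
    member = zf.namelist()[0]            # e.g. readme.txt
    with open(wordlist_path, 'rb') as wordlist:
        for word in wordlist:
            pwd = word.strip()
            try:
                zf.read(member, pwd=pwd) # decompresses and verifies the CRC-32
                return pwd               # only a fully correct password gets here
            except (RuntimeError, zipfile.BadZipFile, zlib.error):
                continue                 # bad check byte or bad compressed data
    return None

print(find_password('/home/begood/Downloads/ch5.zip',
                    '/home/begood/Downloads/SecLists-master/Passwords/rockyou-75.txt'))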
Related
My question is almost exactly the same as this one, which is unanswered. I am trying to read the binary data of a .jpg file to send as an HTTP response from a simple web server written in C++. The code for reading the data is below.
FILE *f = fopen(file.c_str(), "rb");
if (f) {
    fseek(f, 0, SEEK_END);
    int length = ftell(f);
    fseek(f, 0, SEEK_SET);
    char* buffer = (char*)malloc(length + 1);
    if (buffer) {
        int b = fread(buffer, 1, length, f);
        std::cout << "bytes read: " << b << std::endl;
    }
    fclose(f);
    buffer[length] = '\0';
    return buffer;
}
return NULL;
When the request for the image is made and this code runs, fread() returns 25253 bytes being read, which seems correct. However, when I perform strlen(buffer) I get only 4. Of course, this gives an error in the browser when the image tries to display. I have also tried manually setting the HTTP content length to 25253, but I then receive curl error 18, indicating the transfer ended early (as only 4 bytes exist).
As the other poster mentioned in their question, the 5th byte of the image (and, I assume, of most .jpg images) is 0x00, but I am unsure whether this has an effect on saving to the buffer.
I have verified the .jpg images I am loading are in the directory, valid, and display properly when opened normally. I have also tried 2 different methods of loading the binary data, and both also give only 4 bytes, so I am really at a loss. Any help is much appreciated.
When the request for the image is made and this code runs, fread()
returns 25253 bytes being read, which seems correct. However, when I
perform strlen(buffer) I get only 4.
Well, there is your problem: you are reading binary data, not text. Special characters like the newline or the null character do not indicate the structure of a text here; they are simply numbers.
strlen counts the bytes before the first '\0' (a zero byte). A binary file like a JPEG usually contains plenty of zero bytes, and because of the header structure there happens to be one at position 5, so strlen stops at the first zero it finds and returns 4.
You also seem confused about sending this "text-interpreted" JPEG over HTTP. Of course it fails: you cannot simply treat binary data as text in HTTP. You either have to encode it (base64 is very popular) or send the raw bytes with the Content-Length header set. You also have to tell the HTTP client/server the type by setting the proper Content-Type (MIME) header.
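To see the failure mode concretely, here is a minimal Python sketch using a few literal JPEG-style header bytes (illustrative values, not the asker's file): the real length of binary data has to be tracked separately, because any strlen-style scan stops at the first zero byte.

# JFIF header bytes: SOI (FF D8), APP0 marker (FF E0), then the segment
# length 0x0010 -- so the 5th byte (index 4) is 0x00, as in the question.
data = b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01' + b'rest of the image data'

print(len(data))                    # the true length of the binary data
print(len(data.split(b'\x00')[0]))  # what a strlen-style scan reports: 4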
I have a server (Python) that sends bytes from a file to a client in C++. I am using libcurl to make requests to the Python server and Flask to do all of the "hard" work for me in Python. After I get the file bytes from the server, I want to write them to a zip file on the client side. Initially, I was going to use libcurl to do this for me, but I decided against it as it would require an extra function in my wrapper which is not necessary.
FILE* zip_file = fopen(zip_name, "wb");
//make request and store the bytes from the server in a string
fwrite(response_information.first.c_str(), sizeof(char), sizeof(response_information.first.c_str()), zip_file);
//response_information is a pair . First = std::string, Second = curl response code
I do plan on switching to fopen_s (the "safe" version of fopen), but I want to get a working program first. This is part of a bigger project, so I can't provide code that can be run. Some things I think could be causing this: storing the response as a std::string and then writing its C-string version to the file. When I check the return value of fwrite, I get 8, which apparently means 8 bytes were written. Also, on Windows it says the file was modified after I run my program, but nothing is in the zip file itself. How can I write the response bytes to a file?
The third parameter of fwrite is the count of items to write, so sizeof is not what you need here. response_information.first.c_str() is a pointer, so sizeof(response_information.first.c_str()) returns the size of a pointer (8 bytes on a 64-bit build, which is why fwrite reports 8). It should be:
fwrite(response_information.first.c_str(), sizeof(char), strlen(response_information.first.c_str()), zip_file);
or
fwrite(response_information.first.c_str(), sizeof(char), response_information.first.length(), zip_file);
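One caveat worth adding: the response holds zip data, which can contain NUL bytes, so of the two calls above the length()-based one is the safer choice (strlen stops at the first NUL it meets). For comparison, here is a minimal Python sketch of the same write (the file name and data are placeholders); a bytes object carries its own length, so no separate count is needed.

# Placeholder data: the 22-byte end-of-central-directory record of an empty zip.
response_bytes = b'PK\x05\x06' + b'\x00' * 18

with open('archive.zip', 'wb') as out:   # 'wb': binary mode, no newline translation
    out.write(response_bytes)            # writes exactly len(response_bytes) bytes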
I am extracting some data from a UNIVERSE system and want to encrypt it for transfer via email.
I am no UNIVERSE expert, so I am using bits and pieces we have found around the internet, and it "looks" like it is working, but I just can't seem to decrypt the data.
Below is the script I have used based on code found on the web:
RESULT=''
ALGORITHM="rc2-cbc" ; * 128 bit rc2 algorithm in CBC mode
MYKEY="23232323" ; * HEX - Actual Key
IV = "12121212" ; * HEX - Initialization Vector
DATALOC=1 ; * Data in String
KEYLOC=1 ; * Key in String
ACTION=5 ; * Base64 encode after encryption
KEYACTION=1 ; * KEY_ACTUAL_OPENSSL
SALT='' ; * SALT not used
RESULTLOC=1 ; * Result in String RESULT
OPSTRING = ''
RETURN.CODE=ENCRYPT(ALGORITHM,ACTION,DATASTRING,DATALOC,MYKEY,KEYLOC,KEYACTION,SALT,IV,OPSTRING,RESULTLOC)
RETURN.CODE = OPSTRING
Below are a few data strings I have processed through this script and the resulting string:
INPUT 05KI
OUTPUT iaYoHzxYlmM=
INPUT 05FOAA
OUTPUT e0XB/jyE9ZM=
When I try to decode and decrypt the resulting OUTPUT with an online decrypter, I still get no results: https://www.tools4noobs.com/online_tools/decrypt/
I'm thinking it might be a character encoding issue, or perhaps the encryption is not working, but I have no idea how to resolve it; we have been working on this for a few weeks and cannot get any data that is decryptable...
All setups and fields have been set based on this: https://www.dropbox.com/s/ban1zntdy0q27z3/Encrypt%20Function.pdf?dl=0
If I feed the base-64 encrypted string from your code back into the Unidata DECRYPT function with the same parameters, it decrypts just fine.
I suspect something funny is happening with the key. This page mentions something like that: https://u2devzone.rocketsoftware.com/accelerate/articles/data-encryption/data-encryption.html "Generating a suitable key is one of the thornier problems associated with encryption. Keys should be generated as random binary strings, making them obviously difficult to remember. Accordingly, it is probably more common for applications to supply a pass phrase to the ENCRYPT function and have the function internally generate the actual encryption key."
One option to remove the Universe ENCRYPT function from the picture is to use openSSL directly. It looks like the ENCRYPT/DECRYPT functions are just thin wrappers around the openSSL library, so you can execute that to get the result. I'm having problems with the php page you're using for verification, but if I feed the base-64 encrypted string to an openSSL decrypt command on a different machine, it decrypts fine.
MYKEY="A long secret key"
DATASTRING="data to be encrypted data here"
EXECUTE '!echo "':DATASTRING:'"| openssl enc -base64 -e -rc2-cbc -nosalt -k "':MYKEY:'"' CAPTURING RESULT
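For completeness, here is a sketch of the matching decrypt step outside of UNIVERSE, written in Python (the pass phrase and plaintext are the placeholders from the answer; the options simply mirror the encrypt command above, and on OpenSSL 3.x the RC2 cipher may additionally require the legacy provider):

import subprocess

passphrase = 'A long secret key'
plaintext = 'data to be encrypted data here\n'

# Encrypt exactly the way the EXECUTE statement above does it.
enc = subprocess.run(
    ['openssl', 'enc', '-e', '-base64', '-rc2-cbc', '-nosalt', '-k', passphrase],
    input=plaintext.encode(), capture_output=True, check=True)

# Decrypt the captured base-64 string with the mirrored options (-d instead of -e).
dec = subprocess.run(
    ['openssl', 'enc', '-d', '-base64', '-rc2-cbc', '-nosalt', '-k', passphrase],
    input=enc.stdout, capture_output=True, check=True)

print(enc.stdout.decode().strip())   # the base-64 ciphertext
print(dec.stdout.decode())           # round-trips back to the original plaintext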
I have an encrypted binary file of size 256*N bytes.
The last two bytes of the first page (256 bytes long) contain the IV value needed to decrypt.
When I fetch it using the code below:
infile.seek(240,0)
iv = infile.read(16)
(infile is the input file), the IV value does not match the one in the .bin file.
Also, is it fine if I just pass this iv to AES.new, as in the code below?
decryptor = AES.new(key, AES.MODE_CBC, iv)
Also, if I have to pass a hard-coded IV value to the AES.new function, in what format do I need to pass it? I have a 16-byte hex value, and I need to convert it into a byte string, right?
Please let me know how to do it.
First question
Yes, that seems to be the proper method to read the IV. Make sure you opened the file in binary mode though, not in text mode.
Second question
Yes, if the IV is a 16-byte binary value, that would be correct.
Third question
Using a constant IV would defeat the purpose of the IV altogether. But if you must use one that is specified in hexadecimal, you should unhexlify it.
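A minimal sketch of that last point, assuming the PyCrypto/PyCryptodome-style API from the question (the key and IV values here are placeholders): a hard-coded IV given as hex has to be converted to its 16 raw bytes before being handed to AES.new.

from binascii import unhexlify
from Crypto.Cipher import AES

key = b'0123456789abcdef'                     # 16-byte key (placeholder)
iv_hex = '000102030405060708090a0b0c0d0e0f'   # 32 hex characters -> 16 bytes
iv = unhexlify(iv_hex)                        # b'\x00\x01\x02...\x0f'

decryptor = AES.new(key, AES.MODE_CBC, iv)    # same call as in the question
plaintext = decryptor.decrypt(b'\x00' * 16)   # ciphertext length must be a multiple of 16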
I'm using two different libraries to generate a SHA-1 hash for use in file validation - an older version of the Crypto++ library and the Digest::SHA1 class implemented by Ruby. While I've seen other instances of mismatched hashes caused by encoding differences, the two libraries are outputting hashes that are almost identical.
For instance, passing a file through each process produces the following results:
Crypto++
01c15e4f46d8181b984fa2a2c740f8f67130acac
Ruby:
eac15e4f46d8181b984fa2a2c740f8f67130acac
As you can see, only the first two characters of the hash string are different, and this behavior repeats itself over many files. I've taken a look at the source code for each implementation, and the only difference I found at first glance was in the hex constants used to initialize the 160-bit state. I have no idea how those constants are used in the algorithm, and I figured it'd probably be quicker to ask in case anyone has encountered this issue before.
I've included the data from the respective libraries below. I also included the values from OpenSSL, since each of the three libraries had slightly different values.
Crypto++:
digest[0] = 0x67452301L;
digest[1] = 0xEFCDAB89L;
digest[2] = 0x98BADCFEL;
digest[3] = 0x10325476L;
digest[4] = 0xC3D2E1F0L;
Ruby:
context->state[0] = 0x67452301;
context->state[1] = 0xEFCDAB89;
context->state[2] = 0x98BADCFE;
context->state[3] = 0x10325476;
context->state[4] = 0xC3D2E1F0;
OpenSSL:
#define INIT_DATA_h0 0x67452301UL
#define INIT_DATA_h1 0xefcdab89UL
#define INIT_DATA_h2 0x98badcfeUL
#define INIT_DATA_h3 0x10325476UL
#define INIT_DATA_h4 0xc3d2e1f0UL
By the way, here is the code being used to generate the hash in Ruby. I do not have access to the source code for the Crypto++ implementation.
File.class_eval do
  def self.hash_digest filename, options = {}
    opts = {:buffer_length => 1024, :method => :sha1}.update(options)
    hash_func = (opts[:method].to_s == 'sha1') ? Digest::SHA1.new : Digest::MD5.new
    open(filename, "r") do |f|
      while !f.eof
        b = f.read
        hash_func.update(b)
      end
    end
    hash_func.hexdigest
  end
end
I would guess that you are off by a byte in printing out the SHA-1 hashes. Can we see the code that is printing them? If not, here are a couple of potentially useful diagnostics:
Make a very short file (say, one word), and put its contents in as a hex string at http://www.fileformat.info/tool/hash.htm. You would need to know exactly the hex contents of the file, though. You can use xxd to do that on Unix, but you'll have to watch out for endianness issues. I'm not sure how to do it on other OSs.
Does running the same file through the same SHA-1 implementation several times always print out the same value in that first byte? If so, does that value change when you change files?
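For the first diagnostic, here is a minimal Python sketch (short.txt is a hypothetical file name) that prints the file's exact bytes as a hex string to paste into that tool, sidestepping xxd and any endianness worries:

import binascii

with open('short.txt', 'rb') as f:             # binary mode: exact bytes, no translation
    print(binascii.hexlify(f.read()).decode())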
This isn't making much sense. If there were something wrong with the SHA1 implementation, such as with those initialization constants, it would likely produce hashes that are completely different from the real SHA1 hashes, rather than just one byte off. Even if there were something wrong with your file-reading loop, say it dropped a newline, changing one byte in the stream would still give you a completely different hash; it wouldn't be one byte off of the real SHA1 hash.
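A quick sketch of that argument (the input is illustrative only): flipping a single bit of the input changes the whole SHA-1 digest, so two digests that differ only in their first byte point to a printing problem, not a hashing problem.

import hashlib

data = bytearray(b'example input for hashing')
print(hashlib.sha1(data).hexdigest())    # digest of the original input

data[0] ^= 0x01                          # flip one bit of the first byte
print(hashlib.sha1(data).hexdigest())    # a completely different digest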
If I use your method in the following program, I get the correct results.
#!/usr/bin/env ruby

require 'digest/sha1'
require 'digest/md5'

File.class_eval do
  def self.hash_digest filename, options = {}
    opts = {:buffer_length => 1024, :method => :sha1}.update(options)
    hash_func = (opts[:method].to_s == 'sha1') ? Digest::SHA1.new : Digest::MD5.new
    open(filename, "r") do |f|
      while !f.eof
        b = f.read
        hash_func.update(b)
      end
    end
    hash_func.hexdigest
  end
end
puts File.hash_digest(ARGV[0])
And its output compared with that of OpenSSL.
tmp$ dd if=/dev/urandom of=random.bin bs=1MB count=1
1+0 records in
1+0 records out
1000000 bytes (1.0 MB) copied, 0.287903 s, 3.5 MB/s
tmp$ ./digest.rb random.bin
a511d8153426ebea4e4694cde78db4e3a9e413d1
tmp$ openssl sha1 random.bin
SHA1(random.bin)= a511d8153426ebea4e4694cde78db4e3a9e413d1
So there's nothing wrong with your hashing method. Something is going wrong between its return value and it being printed.