I'm trying to buffer-overflow (bof) a particular exploitme on DVL (Damn Vulnerable Linux) by redirecting input (to gets) using run < inputfile inside gdb.
I can overflow the program successfully, but I'm having trouble appending hex values to the string. I have tried quotations, converting the value of the memory address to ASCII, and various escape attempts (\, \, \) with no luck.
Input file example:
AAAA\x42
In the above example it would appear that the backslash is being read as an ASCII character (5c) and the literal characters 42 remain on the stack (oddly?).
How would one go about specifying a hex value inside a gdb input file?
Thanks
Use perl! :)
reader@hacking:~/booksrc $ ./overflow_example $(perl -e 'print "A"x30')
With the -e option, perl will evaluate the following argument as code, and the surrounding $( ) will treat the output of perl as a string. So the command above is identical to:
reader@hacking:~/booksrc $ ./overflow_example AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
(adding x30 after a string will repeat it 30 times).
Of course perl accepts other hex values with the \x?? notation. One more thing: to concatenate strings, use a dot:
reader@hacking:~/booksrc $ perl -e 'print "A"x20 . "BCD" . "\x61\x66\x67\x69" ;'
AAAAAAAAAAAAAAAAAAAABCDafgi
So you can redirect the output of perl into your input file, or call perl directly inside gdb when you run the program.
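For example, you could generate the input file with perl and then feed it to the program under gdb (a sketch: the 64-byte padding and the \x42\x43\x44\x45 address bytes are placeholders, not values from your question):
$ perl -e 'print "A"x64 . "\x42\x43\x44\x45"' > inputfile
$ gdb ./exploitme
(gdb) run < inputfile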
There is a string located within a file that starts with 4bceb and is 32 characters long.
To find it I tried the following
Input:
find / -type f 2>/dev/null | xargs grep "4bceb\w{27}" 2>/dev/null
After entering the command, it seems like the shell is awaiting some additional input.
Your command seems alright in principle, i.e. it should correctly execute the grep command for each file find returns. However, I don't believe your regular expression (or rather, the way you call grep) is correct for what you want to achieve.
First, in order to get your expression to work, you need to tell grep that you are using Perl syntax by specifying the -P flag.
Second, your regexp will return the full lines containing sequences that start with "4bceb" and are at least 32 characters long, but may be longer as well. If, for example, your ./test.txt file contents were
4bcebUUUUUUUUUUUUUUUUUUUUUUUU31
4bcebVVVVVVVVVVVVVVVVVVVVVVVVV32
4bcebWWWWWWWWWWWWWWWWWWWWWWWWWW33
sometext4bcebYYYYYYYYYYYYYYYYYYYYYYYYY32somemoretext
othertext 4bcebZZZZZZZZZZZZZZZZZZZZZZZZZ32 evenmoretext
your output would include all lines except the first one (in which the sequence is shorter than 32 characters). If you actually want to limit your results to lines that just contain sequences that are exactly 32 characters long, you can use the -w flag (for word-regexp) with grep, which would only return lines 2 and 5 in the above example.
Third, if you only want the match but not the surrounding text, the grep flag -o will do exactly this.
And finally, you don't need to pipe the find output into xargs, as grep can directly do what you want:
grep -rnPow / -e "4bceb\w{27}"
will recursively (-r) scan all files starting from / and return just the ones that contain matching words, along with the matches (as well as the line numbers they were found in, as result of the flag -n):
./test.txt:2:4bcebVVVVVVVVVVVVVVVVVVVVVVVVV32
./test.txt:5:4bcebZZZZZZZZZZZZZZZZZZZZZZZZZ32
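If you would rather keep the find | xargs pipeline from your question, a corrected sketch (using find's -print0 with xargs -0 so unusual file names survive the pipe) would be:
find / -type f -print0 2>/dev/null | xargs -0 grep -Pon "4bceb\w{27}" 2>/dev/null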
sed -nE "s/(IMAGE)(.*)/\1\2/p" somefile > sfh.raw
somefile contains random ASCII as well as binary data after the image. The above sed command works if there is no newline in the binary data; if there is one, it outputs only up to that newline and ignores the rest of the file.
Is there a way to make sed's (.*) capture everything, including newlines, and continue until the end of somefile? Example content of somefile:
IMAGE254656
dsfdfdl;flkdfldsfkdsfkdlsfdfldfkdsfo;dsfkldsfdsfsd
Consider using awk for this:
awk '/^IMAGE/{i=1}i' somefile
awk processes a file line by line and allows you to set variables and check contents and lots of other very fancy stuff for each line.
This script checks each line to see if it starts with IMAGE. If so, it sets variable i to 1. Then it checks to see if i is set. If so, it does its default behavior of printing the line.
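Given the example somefile above, awk prints everything from the IMAGE line through the end of the file:
$ awk '/^IMAGE/{i=1}i' somefile
IMAGE254656
dsfdfdl;flkdfldsfkdsfkdlsfdfldfkdsfo;dsfkldsfdsfsd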
Information about the environment I am working in:
$ uname -a
AIX prd231 1 6 00C6B1F74C00
$ oslevel -s
6100-03-10-1119
Code Block A
( grep schdCycCleanup $DCCS_LOG_FILE | sed 's/[~]/ \
/g' | grep 'Move(s) Exist for cycle' | sed 's/[^0-9]*//g' ) > cycleA.txt
Code Block B
( grep schdCycCleanup $DCCS_LOG_FILE | sed 's/[~]/ \n/g' | grep 'Move(s) Exist for cycle' | sed 's/[^0-9]*//g' ) > cycleB.txt
I have two code blocks (shown above) that make use of sed to trim the input down to six digits, but one command is behaving differently than I expected.
Sample of input for the two code blocks
Mar 25 14:06:16 prd231 ajbtux[33423660]: 20160325140616:~schd_cem_svr:1:0:SCHD-MSG-MOVEEXISTCYCLE:200705008:AUDIT:~schdCycCleanup - /apps/dccs/ajbtux/source/SCHD/schd_cycle_cleanup.c - line 341~ SCHD_CYCLE_CLEANUP - Move(s) Exist for cycle 389210~
I get the following output when the sample input above goes through the two code blocks.
cycleA.txt content
389210
cycleB.txt content
25140616231334236602016032514061610200705008341389210
I understand that my last piped sed command (sed 's/[^0-9]*//g') is deleting all characters other than numbers, so I omitted it from the code blocks and placed the output in two additional files. I get the following output.
cycleA1.txt content
SCHD_CYCLE_CLEANUP - Move(s) Exist for cycle 389210
cycleB1.txt content
Mar 25 15:27:58 prd231 ajbtux[33423660]: 20160325152758: nschd_cem_svr:1:0:SCHD-MSG-MOVEEXISTCYCLE:200705008:AUDIT: nschdCycCleanup - /apps/dccs/ajbtux/source/SCHD/schd_cycle_cleanup.c - line 341 n SCHD_CYCLE_CLEANUP - Move(s) Exist for cycle 389210 n
I can see that the first code block is removing everything other than (SCHD_CYCLE_CLEANUP - Move(s) Exist for cycle 389210) by splitting on the tildes, but the second code block is just replacing the tildes with the character n. I can also see that the line break after sed 's/[~]/ is necessary in the first code block, which is why I thought \n would simulate a line break, but that is not the case. I think my different output results are because of the way regular expressions are being used. I have tried to read up on regular expressions and searched Stack Overflow but did not find what I was looking for. Could someone explain how I can achieve the same result from code block B as from code block A without having part of my code on a second line?
Thank you in advance
This is an example of the XY problem (http://xyproblem.info/). You're asking for help to implement something that is the wrong solution to your problem. Why are you changing ~s to newlines, etc when all you need given your posted sample input and expected output is:
$ sed -n 's/.*schdCycCleanup.* \([0-9]*\).*/\1/p' file
389210
or:
$ awk -F'[ ~]' '/schdCycCleanup/{print $(NF-1)}' file
389210
If that's not all you need then please edit your question to clarify your requirements for WHAT you are trying to do (as opposed to HOW you are trying to do it) as your current approach is just wrong.
Etan Reisner's helpful answer explains the problem and offers a single-line solution based on an ANSI C-quoted string ($'...'), which is appropriate, given that you originally tagged your question bash.
(Ed Morton's helpful answer shows you how to bypass your problem altogether with a different approach that is both simpler and more efficient.)
However, it sounds like your shell is actually something different - presumably ksh88, an older version of the Korn shell that is the default sh on AIX 6.1 - in which such strings are not supported.[1]
(ANSI C-quoted strings were introduced in ksh93, and are also supported not only in bash, but in zsh as well).
Thus, you have the following options:
With your current shell, you must stick with a two-line solution that contains a \-escaped actual newline, as in your code block A.
Note that $(printf '\n') to create a newline does not work, because command substitutions invariably trim all trailing newlines, resulting in the empty string in this case (but see the sentinel workaround sketched after this list).
Use a more modern shell that supports ANSI C-quoted strings, and use Etan's answer. The IBM documentation at http://www.ibm.com/support/knowledgecenter/ssw_aix_61/com.ibm.aix.cmds3/ksh.htm tells me that ksh93 is available as an alternative shell on AIX 6.1, as /usr/bin/ksh93.
If feasible: install GNU sed, which natively understands escape sequences such as \n in replacement strings.
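As an aside, a common workaround for the trailing-newline trimming mentioned above (a sketch only, untested on AIX) is to have printf emit a sentinel character after the newline and strip the sentinel with a parameter expansion, which ksh88 does support:
nl=$(printf '\nX'); nl=${nl%X}    # $nl now holds a literal newline
grep schdCycCleanup "$DCCS_LOG_FILE" | sed "s/[~]/ \\$nl/g" | grep 'Move(s) Exist for cycle' | sed 's/[^0-9]*//g' > cycleB.txt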
[1] As for what actually happens when you try echo 'foo~bar~baz' | sed $'s/[~]/\\\n/g' in a POSIX-like shell that does not support $'...': the $ is left as-is, because what follows is not a valid variable name, and sed ends up seeing literal $s/[~]/\\\n/g, where the $ is interpreted as a context address applying to the last input line - which makes no difference here, because there is only one line. \\ is interpreted as plain \, and \n as plain n, effectively replacing ~ instances with literal \n sequences.
GNU sed handles \n in the replacement the way you expect.
OS X (and presumably BSD) sed does not. It treats it as a normal escaped character and just unescapes it to n. (Though I don't see this in the manual anywhere at the moment.)
You can use $'' quoting to use \n as a literal newline if you want though.
echo 'foo~bar~baz' | sed $'s/[~]/\\\n/g'
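In a shell that does support $'...' (bash, ksh93, zsh), that command splits on the tildes as intended:
$ echo 'foo~bar~baz' | sed $'s/[~]/\\\n/g'
foo
bar
baz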
I'm having a problem using grep.
I have a file (http://pastebin.com/HxAcciCa) that I want to check for certain patterns. But when I'm trying to search for one, grep returns every line of the file, provided the pattern exists somewhere in it.
To explain more this is the code that I'm running
grep -F "ENVIRO" "$file_pos" >> blah
No matter what else I try, even if I provide a whole line as the pattern, grep always returns all the lines.
These are variations of what I'm trying:
grep -F "E20" "$file_pos" >> blah
grep E20 "$file_pos" >> blah
grep C:\E20-II\ENVIRO\SSNHapACS480.dll "$file_pos" >> blah
grep -F C:\E20-II\ENVIRO\SSNHapACS480.dll "$file_pos" >> blah
Also, for some strange reason, when adding the -x option to grep, it doesn't return any line, despite the fact that the exact pattern exists.
I've searched the web and the bash documentation for the cause but couldn't find anything.
My final test was the following
grep -F -C 1 "E20" "$store_pos" >> blah #store_pos has the same value as $file_pos
I thought maybe it was printing the lines after the result but that was not the case.
I was using the blah file to see the output.
Also I'm using Linux mint rebecca.
Finally, although the naming is quite similar, this question is not similar to Why does grep match all lines for the pattern "\'".
And finally I would like to say that I am new to bash.
I suspect the error might be due to the main file (http://pastebin.com/HxAcciCa) rather than the code.
From the comments, it appears that the file has carriage returns delimiting the lines, rather than the linefeeds that grep expects; as a result, grep sees the file as one huge line, that either matches or fails to match as a whole.
(Note: there are at least three different conventions about how to delimit the lines in a "plain text" file -- unix uses newline (\n), DOS/Windows uses carriage return followed by newline (\r\n), and pre-OSX versions of MacOS used just carriage return (\r).)
I'm not clear on how your file wound up in this format, but you can fix it easily with:
tr '\r' '\n' <badfile >goodfile
or on the fly with:
tr '\r' '\n' <badfile | grep ...
Check the line endings in your input file: file, wc -l.
Check you are indeed using the correct grep: which grep.
Use > to redirect the output, or | more or | less to not be confused by earlier attempts you are appending to.
Edit: Looks like your file has the wrong line endings (old Mac OS (CR) perhaps). If you have dos2unix you can try to convert them to Unix style line endings (LF).
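For example (hypothetical file name; file's exact wording varies by version), a file with CR-only line endings would show up in those checks roughly like this:
$ file suspect.txt
suspect.txt: ASCII text, with CR line terminators
$ wc -l suspect.txt
0 suspect.txt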
I don't have access to a PC at the moment, but what could possibly help you troubleshoot:
1. Use grep --color -F to see if it matches correctly.
2. After your statement, use | cat -A to see if there are any surprising control characters. Lines should end in $; other characters like ^I or ^M can sometimes be a headache.
I suspect number 2, as it seems to be Windows output. In that case cat filename | dos2unix | grep stmt should solve it.
Did you save the dos2unix output as another file?
Just double check the file, it should be similar to this:
[root@pro-mon9001 ~]# cat -A Test.txt
Windows^M$
Style^M$
Files^M$
Are^M$
Hard ^M$
To ^M$
Parse^M$
[root@pro-mon9001 ~]# dos2unix Test.txt
dos2unix: converting file Test.txt to Unix format ...
[root@pro-mon9001 ~]# cat -A Test.txt
Windows$
Style$
Files$
Are$
Hard$
To$
Parse$
Now it should parse properly - so just verify that it did convert the file properly
Good luck!
I am trying to remove parts from a binary file which are between the ANSI strings "stringstart" and "stringend". Is it possible to do this with sed or perl -pe?
I am thinking about some Regex solution but I don't know how to write it or how well regex works with binary files.
sed is designed for handling text files rather than binary, though these days, the distinction is generally less significant than it once was. The biggest problem is that text files do not contain zero bytes (bytes with value 0) and binary files do, and many C string processing functions stop at the first zero byte. sed also reads 'lines' marked by newline characters. Binary files can end up with long lines as a result. Finally, there's no guarantee about the relative placement of the string start and end markers relative to newlines. All of these characteristics make sed less suitable for this job than Perl is.
In Perl, I'd be sorely tempted to slurp the file into memory, use an appropriate regex to zap the data from the memory image, and then write the result back to an appropriate location.
perl -e 'local($/); $data = <>; $data =~ s/stringstart(.*?)stringend//gms; print $data'
Now tested - test data created using:
#!/usr/bin/env perl
use strict;
use warnings;
sub full_set
{
    foreach my $i (0..255) { printf "%c", $i; }
}

sub random_set
{
    my($n) = @_;
    foreach my $i (0..$n) { printf "%c", int(rand(255)); }
}
full_set;
random_set(1024);
printf("stringstart");
full_set;
random_set(512);
full_set;
printf("stringend");
random_set(256);
The script deletes 1045 characters from the input - which corresponds to 'stringstart' plus 'stringend' (20 characters), plus 2 * 256 for the two full_set calls between the markers, plus 513 (since random_set(512) actually prints 513 characters).
Note that the main script will read all files into memory at once. If you want it to process one file at a time, you'll have to work a little harder; it probably ceases to be a one-liner.
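For what it's worth, a sketch of such a per-file variant (hypothetical, but using only standard Perl switches: -0777 slurps each input file whole, -pi.bak edits in place keeping a backup):
perl -0777 -pi.bak -e 's/stringstart.*?stringend//gs' file1 file2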
An alternate approach:
perl -pi -we'BEGIN{$/="stringend"} chomp and s/stringstart.*//s' your_binary_file
Here the input record separator $/ is set to "stringend", so Perl reads the file in chunks that each end at that marker; chomp strips the marker off, and the substitution then deletes from "stringstart" to the end of the chunk, removing both markers and everything between them.
You can use a regular expression that deletes every character not listed after the ^ inside the [ ]. For example:
cp /bin/ls ./binfile
file binfile
binfile: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
Do the perl pie on it:
perl -pi -e 's/[^a-zA-Z0-9_+\n]//g' binfile
Then look at the binary file afterwards:
file binfile
binfile: ASCII text, with very long lines
You'll obviously have to add more to that command, as it'll get rid of several other would-be valid characters. But this should get you started.