How to generate a BPF executable from assembly - llvm

I want to convert BPF assembly into an executable.
For example, I have:
entrypoint:
div32 r1, 1768515945
exit
Can I get an executable from it? It should be loaded and executed by the BPF VM.
Thanks.

Each instruction is 64 bits. This should assemble to:
00: 69 69 69 69 00 00 01 34
08: 00 00 00 00 00 00 00 95
The first instruction is BPF_DIV | BPF_K | BPF_ALU | (1 << 8) | (1768515945 << 32). The second is BPF_JMP | BPF_EXIT (opcode 0x95). For more information, see the kernel documentation. Note that exit expects r0 to contain a return code, but you haven't explicitly set one; it should default to 0.
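If it helps, here is a small C++ sketch (my own illustration, not part of the answer above) that packs each instruction into a 64-bit value exactly as the formulas above do and writes the raw program out. Names like encode and prog.bin are placeholders, and the output is a bare instruction stream, not an ELF. Note that in the file the bytes appear in little-endian order, i.e. 34 01 00 00 69 69 69 69 for the first instruction.
#include <cstdint>
#include <cstdio>

// Pack one eBPF instruction: opcode (bits 0-7), dst/src registers (bits 8-15),
// signed offset (bits 16-31), 32-bit immediate (bits 32-63).
static std::uint64_t encode(std::uint8_t opcode, std::uint8_t dst, std::uint8_t src,
                            std::uint16_t offset, std::uint32_t imm) {
    return static_cast<std::uint64_t>(opcode)
         | (static_cast<std::uint64_t>((src << 4) | (dst & 0x0f)) << 8)
         | (static_cast<std::uint64_t>(offset) << 16)
         | (static_cast<std::uint64_t>(imm) << 32);
}

int main() {
    const std::uint64_t prog[] = {
        encode(0x34, 1, 0, 0, 1768515945u), // div32 r1, 0x69696969 (BPF_ALU | BPF_DIV | BPF_K)
        encode(0x95, 0, 0, 0, 0),           // exit (BPF_JMP | BPF_EXIT)
    };
    FILE *f = std::fopen("prog.bin", "wb");
    std::fwrite(prog, sizeof prog[0], 2, f); // bytes land in little-endian order on x86
    std::fclose(f);
    return 0;
}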

Wrong CRLF in UTF-16 stream?

Here is a problem I could not solve despite all my efforts. So I am totally stuck, please help!
For regular (“ASCII”) mode, the following simplified file and stream outputs
FILE *fa = fopen("utfOutFA.txt", "w");
fprintf(fa, "Line1\nLine2");
fclose(fa);
ofstream sa("utfOutSA.txt");
sa << "Line1\nLine2";
sa.close();
result, naturally, in exactly the same text files (hex dump):
00000000h: 4C 69 6E 65 31 0D 0A 4C 69 6E 65 32 ; Line1..Line2
where the new line \n is expanded to CRLF: 0D 0A – typical for Windows.
Now, we do the same for Unicode output, namely UTF-16 LE which is a sort of “default”. File output
FILE *fu = fopen("utfOutFU.txt", "w, ccs=UNICODE");
fwprintf(fu, L"Line1\nLine2");
fclose(fu);
results in this contents:
00000000h: FF FE 4C 00 69 00 6E 00 65 00 31 00 0D 00 0A 00 ; ÿþL.i.n.e.1.....
00000010h: 4C 00 69 00 6E 00 65 00 32 00 ; L.i.n.e.2.
which looks perfectly correct considering BOM and endianness, including CRLF: 0D 00 0A 00. However, the similar stream output
wofstream su("utfOutSU.txt");
su.imbue(locale(locale::empty(), new codecvt_utf16<wchar_t, 0x10ffffUL,
codecvt_mode(generate_header + little_endian)>));
su << L"Line1\nLine2";
su.close();
results in one byte less and an overall incorrect text file:
00000000h: FF FE 4C 00 69 00 6E 00 65 00 31 00 0D 0A 00 4C ; ÿþL.i.n.e.1....L
00000010h: 00 69 00 6E 00 65 00 32 00 ; .i.n.e.2.
The reason is the wrong expansion of CRLF: 0D 0A 00 instead of 0D 00 0A 00. Is this a bug, or have I done something wrong?
I use the Microsoft Visual Studio compiler (14.0 and others). I tried using stream endl instead of \n – same result! I tried putting su.imbue() first and then su.open() – all the same! I also checked the UTF-8 output (ccs=UTF-8 for file and codecvt_utf8 for stream) – no problem there, as CRLF stays the same as in ASCII mode: 0D 0A.
I appreciate any ideas and comments on the issue.
When you are imbue()'ing a new locale into the std::wofstream, you are wiping out its original locale. Don't use locale::empty(), use su.getloc() instead, so the new locale copies the old locale before modifying it.
Also, on a side note, the last template parameter of codecvt_utf16 is a bitmask, so codecvt_mode(generate_header + little_endian) really should be std::generate_header | std::little_endian instead.
su.imbue(std::locale(su.getloc(), new codecvt_utf16<wchar_t, 0x10ffffUL,
std::generate_header | std::little_endian>));
I've discovered that this problem comes from the fact that you are writing the file in text mode. What you want to do is open your output file in binary mode and the issue will be solved, like so:
wofstream su("utfOutSU.txt", ofstream::out | ofstream::binary);
su.imbue(locale(su.getloc(), new codecvt_utf16<wchar_t, 0x10ffffUL,
codecvt_mode(generate_header + little_endian)>));
su << L"Line1\r\nLine2";
su.close();
Something "clever" is probably done behinde the scenes when writing text.
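Putting both answers together, a minimal sketch (same file name as in the question, MSVC-era pre-C++17 codecvt_utf16) would look like this:
#include <fstream>
#include <locale>
#include <codecvt>
using namespace std;

int main() {
    // Binary mode disables the CRT's byte-level \n -> \r\n translation,
    // so the CR has to be written explicitly in the string itself.
    wofstream su("utfOutSU.txt", ofstream::out | ofstream::binary);
    // Copy the stream's existing locale instead of wiping it with locale::empty(),
    // and combine the codecvt_mode flags with | rather than +.
    su.imbue(locale(su.getloc(), new codecvt_utf16<wchar_t, 0x10ffffUL,
             codecvt_mode(generate_header | little_endian)>));
    su << L"Line1\r\nLine2";
    su.close();
    return 0;
}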

SMPP client is adding an ¿ to the end of the message

I'm trying to make a Windows desktop smpp client, and it's connecting and sending well, aside from a bug where an extra character (¿) is added to the end of the message content that I'm receiving on my phone.
So I'm sending "test" but my phone receives "test¿". Here's the contents of the pdu object, just before it gets sent:
size :58
sequence :2
cmd id :0x4
cmd status:0x0 : No Error
00000000 00 00 00 3a 00 00 00 04 00 00 00 00 00 00 00 02 |...:............|
00000010 00 05 00 74 65 73 74 66 72 6f 6d 00 01 01 34 34 |...testfrom...44|
00000020 37 37 37 37 37 37 37 37 37 37 00 00 00 00 00 00 |7777777777......|
00000030 00 00 00 00 05 74 65 73 74 00 |.....test.|
0000003a
I'm using this c++ smpp library as a base:
https://github.com/onlinecity/cpp-smpp
I had to make some slight changes to get it working on windows, but I don't think anything was changed that could have affected this.
Someone else ran a test using a different account on the smpp server, and their test added a # symbol instead.
Any ideas what could be causing this? Thanks!
Found the problem in the end: it was due to an option in the smpp library that defaults to true, called nullTerminateOctetStrings.
It was adding the 00 to the end of the message. It sounds like this is required by the SMPP 3.4 standard, but our SMSC didn't like it. Ideally I would fix the SMSC, but that's provided by a 3rd party, so I've just switched off the null termination instead.
Someone with a similar problem and more info here: https://www.mail-archive.com/devel#kannel.3glab.org/msg06765.html
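To make the stray character concrete: in the dump above, sm_length is 05 for the 4-byte "test" because the terminating 00 is counted too, and that extra octet is what the handset rendered. A tiny standalone sketch (my illustration, not the library's code; shortMessageField is a made-up helper):
#include <cstdint>
#include <string>
#include <vector>

// Build the sm_length + short_message part of a submit_sm body, with and
// without the trailing null the library adds by default.
std::vector<std::uint8_t> shortMessageField(const std::string &text, bool nullTerminate) {
    std::vector<std::uint8_t> out;
    out.push_back(static_cast<std::uint8_t>(text.size() + (nullTerminate ? 1 : 0))); // sm_length
    out.insert(out.end(), text.begin(), text.end());
    if (nullTerminate)
        out.push_back(0x00); // the extra octet the handset rendered as '¿' / '#'
    return out;
}
// shortMessageField("test", true)  -> 05 74 65 73 74 00  (what the dump above shows)
// shortMessageField("test", false) -> 04 74 65 73 74     (what the SMSC expected)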

Detecting Memory Leaks in C++ Windows application

I have a C++ Windows application which has some memory leak issues. Is it possible to analyze the memory leaks from a dump using NTSD? If so, please guide me on how to do that.
I have also heard that we can do it using a user-mode dump. I am not very familiar with finding leaks on Windows. It is very easy on Linux with Valgrind.
Are there any better options to check this?
See the Visual Leak Detector project page for details; I have used it on Windows.
All you do in your application is
#include <vld.h>
and you will see a report about the detected leaks in the debugger's output window when debugging your program, something like this:
---------- Block 1199 at 0x04BE1058: 136 bytes ----------
Call Stack:
d:\Foobar\FooLog.cpp (26): FooLog::getInstance
d:\Foobar\FooMain.cpp (75): FooMain::init
f:\dd\vctools\crt_bld\self_x86\crt\src\crtexe.c (578): __tmainCRTStartup
f:\dd\vctools\crt_bld\self_x86\crt\src\crtexe.c (403): WinMainCRTStartup
0x759A3677 (File and line number not available): BaseThreadInitThunk
0x770C9D42 (File and line number not available): RtlInitializeExceptionChain
0x770C9D15 (File and line number not available): RtlInitializeExceptionChain
Data:
9C 33 2D 6B 74 2A 2D 6B C8 11 BE 04 00 00 00 00 .3-kt*-k ........
00 00 00 00 70 14 BB 6C 70 14 BB 6C 00 00 00 00 ....p..l p..l....
00 00 00 00 68 14 BB 6C 68 14 BB 6C 00 00 00 00 ....h..l h..l....
00 00 00 00 6C 14 BB 6C 6C 14 BB 6C 20 12 BE 04 ....l..l l..l....
00 00 00 00 CD 00 CD CD 00 00 00 00 01 CD CD CD ........ ........
68 14 BB 6C 78 33 2D 6B 00 00 00 00 00 00 00 00 h..lx3-k ........
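If you want to try it on something minimal first, roughly all it takes is the snippet below, built in a Debug configuration with VLD installed (the deliberate leak is my own example):
#include <vld.h> // including this in one source file of the module is enough to hook allocations

int main() {
    int *leaked = new int[10]; // deliberately never deleted, so VLD reports this block
    leaked[0] = 42;
    return 0; // the report is printed when the program exits under the debugger
}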
I've had great success tracking down memory and resource leaks with DrMemory. It works with both GCC and MSVC and it's very straightforward to use.

Code corresponding to leaks with Visual Leak Detector

I am trying to use Visual Leak Detector in Visual Studio 2008, here is an example of the output I get:
Detected memory leaks!
Dumping objects ->
{204} normal block at 0x036C1568, 1920 bytes long.
Data: < > 80 08 AB 03 00 01 AB 03 80 F9 AA 03 00 F2 AA 03
{203} normal block at 0x0372CC68, 40 bytes long.
Data: <( > 28 00 00 00 80 02 00 00 E0 01 00 00 01 00 18 00
{202} normal block at 0x0372CC00, 44 bytes long.
Data: << E > 3C 16 45 00 80 02 00 00 E0 01 00 00 01 00 00 00
The user's guide says to click on any line to jump to the corresponding file/line of code; I tried clicking on every line but nothing happens! What am I missing?
Did you compile your code with optimization off and debug information on? Without this, it's unlikely to be able to link the addresses to your actual source code.
It could also be that the leak is occurring in code for which it can't find the source (for example an included library).
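For what it's worth, "optimization off and debug information on" corresponds to the usual Debug configuration: /Od and /Zi for the compiler plus /DEBUG for the linker. A rough command-line equivalent (myapp.cpp is just a placeholder):
cl /nologo /Od /Zi /MDd /EHsc myapp.cpp /link /DEBUG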
You could also try Deleaker; it should help you.

Are "#define new DEBUG_NEW" and "#undef THIS_FILE" etc. actually necessary?

When you create a new MFC application, the wizard creates the following block of code in almost every CPP file:
#ifdef _DEBUG
#define new DEBUG_NEW
#endif
and sometimes it also adds this:
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
I would like to remove this code from my CPP files if it is redundant. I am using an MFC app with C++/CLI on VS2008.
I have tried running in Debug after deleting this code from a CPP, and it seems to work fine. "new"ing variables works fine, there are no leaks, and ASSERT dialogs show the correct filename and jump to the offending line.
Can anyone tell me what it does and whether it's safe to delete it?
It is perfectly safe to delete this. It's a debugging aid: leaving it in generates more detailed warnings in the output window for any memory leaks you have when the program exits. If you delete it, you still get the memory leak report, just without any details about where in your source code the leaks occurred.
On Microsoft Visual C++ 2010, I can remove the whole block and put just one #define new DEBUG_NEW in a header, and I still get the right memory leak reports, e.g.:
Detected memory leaks!
Dumping objects ->
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {7508} normal block at 0x029B9598, 54 bytes long.
Data: < > E4 B8 C9 00 12 00 00 00 12 00 00 00 01 00 00 00
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {7501} normal block at 0x029B94A8, 28 bytes long.
Data: < > E4 B8 C9 00 05 00 00 00 05 00 00 00 01 00 00 00
f:\source\agent\agent\deviceid.cpp(21) : {7500} normal block at 0x029CDFC0, 8 bytes long.
Data: < > A8 95 9B 02 B8 94 9B 02
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {6786} normal block at 0x029C0D88, 160 bytes long.
Data: < G > E4 B8 C9 00 19 00 00 00 47 00 00 00 01 00 00 00
f:\source\agent\sysinfo\sysinfo.cpp(27) : {6733} normal block at 0x029B84D8, 92 bytes long.
Data: < > 00 00 00 00 00 10 00 00 00 00 01 00 FF FF FE 7F
Object dump complete.
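For reference, a minimal sketch of the setup described above (file and function names are illustrative, not taken from the question): the mapping lives once in a shared header, and any CPP that leaks through new then shows up in the dump with its own file and line.
// stdafx.h (or any common header included after the MFC headers)
#ifdef _DEBUG
#define new DEBUG_NEW
#endif

// somefile.cpp
#include "stdafx.h"

void Leak()
{
    int *p = new int[4]; // intentionally not deleted: reported as somefile.cpp(<line>)
}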